Historically, patient datasets have been used to develop and validate various reconstruction algorithms for PET/MRI and PET/CT. To enable such algorithm development without the need to acquire hundreds of patient exams, in this paper we demonstrate a deep learning technique that can generate synthetic but realistic whole-body PET sinograms from abundantly available whole-body MRI. Specifically, we use a dataset of 56 $^{18}$F-FDG PET/MRI exams to train a 3D residual UNet to predict physiological PET uptake from whole-body T1-weighted MRI. In training, we implemented a balanced loss function to produce realistic uptake across a large dynamic range, and computed the loss along tomographic lines of response to mimic the PET acquisition. The predicted PET images are then forward-projected to produce synthetic PET time-of-flight (TOF) sinograms that can be used with vendor-provided PET reconstruction algorithms, including using CT-based attenuation correction (CTAC) and MR-based attenuation correction (MRAC). The resulting synthetic data recapitulates physiological $^{18}$F-FDG uptake, e.g. high uptake localized to the brain and bladder, as well as uptake in the liver, kidneys, heart, and muscle. To simulate abnormalities with high uptake, we also insert synthetic lesions. We demonstrate that this synthetic PET data can be used interchangeably with real PET data for the PET quantification task of comparing CT- and MR-based attenuation correction methods, achieving $\leq 7.6\%$ error in the mean compared to using real data. Together, these results show that the proposed synthetic PET data pipeline can reasonably be used for the development, evaluation, and validation of PET/MRI reconstruction methods.
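The key step of turning predicted PET images into sinograms is forward projection along lines of response. The following is a minimal toy sketch of a 2D, non-TOF parallel-beam forward projector (the paper's actual projector and the vendor TOF geometry are not described here; the function name and sampling scheme are illustrative assumptions):

```python
import numpy as np

def forward_project(img, n_angles=8, n_bins=None):
    """Toy parallel-beam forward projector: integrates `img` along lines
    of response at several angles, producing a (non-TOF) sinogram."""
    h, w = img.shape
    n_bins = n_bins or max(h, w)
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
    radius = max(h, w) / 2.0
    sino = np.zeros((n_angles, n_bins))
    ts = np.linspace(-radius, radius, n_bins)      # detector bin offsets
    ss = np.linspace(-radius, radius, 2 * n_bins)  # sample steps along each ray
    for a in range(n_angles):
        theta = np.pi * a / n_angles
        d = np.array([np.cos(theta), np.sin(theta)])   # ray direction
        n = np.array([-np.sin(theta), np.cos(theta)])  # detector axis
        for b, t in enumerate(ts):
            xs = cx + t * n[0] + ss * d[0]
            ys = cy + t * n[1] + ss * d[1]
            ix = np.round(xs).astype(int)
            iy = np.round(ys).astype(int)
            ok = (ix >= 0) & (ix < w) & (iy >= 0) & (iy < h)
            sino[a, b] = img[iy[ok], ix[ok]].sum()
    return sino
```

Because projection is linear, doubling the image doubles the sinogram; this is the property that lets a loss computed in sinogram space penalize uptake errors in proportion to their contribution along each line of response.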
Multi-view projection techniques have shown themselves to be highly effective in achieving top-performing results in the recognition of 3D shapes. These methods involve learning how to combine information from multiple view-points. However, the camera view-points from which these views are obtained are often fixed for all shapes. To overcome the static nature of current multi-view techniques, we propose learning these view-points. Specifically, we introduce the Multi-View Transformation Network (MVTN), which uses differentiable rendering to determine optimal view-points for 3D shape recognition. As a result, MVTN can be trained end-to-end with any multi-view network for 3D shape classification. We integrate MVTN into a novel adaptive multi-view pipeline that is capable of rendering both 3D meshes and point clouds. Our approach demonstrates state-of-the-art performance in 3D classification and shape retrieval on several benchmarks (ModelNet40, ScanObjectNN, ShapeNet Core55). Further analysis indicates that our approach exhibits improved robustness to occlusion compared to other methods. We also investigate additional aspects of MVTN, such as 2D pretraining and its use for segmentation. To support further research in this area, we have released MVTorch, a PyTorch library for 3D understanding and generation using multi-view projections.
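At its core, a learned-viewpoint module regresses per-shape camera placements instead of using a fixed ring of views. The sketch below is a heavily simplified, non-differentiable stand-in (the class name, feature dimensions, and offset bound of $\pi/4$ are assumptions, not MVTN's actual design); MVTN itself trains these weights end-to-end through a differentiable renderer:

```python
import numpy as np

def spherical_to_camera(azim, elev, dist=2.0):
    """Place a camera on a sphere around the object from azimuth/elevation (radians)."""
    x = dist * np.cos(elev) * np.cos(azim)
    y = dist * np.cos(elev) * np.sin(azim)
    z = dist * np.sin(elev)
    return np.array([x, y, z])

class TinyMVTN:
    """Toy MVTN-style regressor: predicts per-view offsets to canonical
    view-points from a global shape descriptor (random weights here)."""
    def __init__(self, feat_dim=32, n_views=4, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(0, 0.1, (feat_dim, n_views * 2))
        self.canonical = np.stack(
            [np.linspace(0, 2 * np.pi, n_views, endpoint=False),
             np.zeros(n_views)], axis=1)  # canonical (azim, elev) per view
        self.n_views = n_views

    def predict_views(self, shape_feat):
        # Bounded offsets keep learned views near the canonical ring.
        offsets = np.tanh(shape_feat @ self.W).reshape(self.n_views, 2) * (np.pi / 4)
        return self.canonical + offsets
```

Predicting bounded offsets to canonical views, rather than free angles, is a common stabilization choice when the view regressor and the downstream classifier are trained jointly.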
The deployment flexibility and maneuverability of Unmanned Aerial Vehicles (UAVs) have increased their adoption in various applications, such as wildfire tracking, border monitoring, etc. In many critical applications, UAVs capture images and other sensory data and then send the captured data to remote servers for inference and data processing tasks. However, this approach is not always practical in real-time applications due to connection instability, limited bandwidth, and end-to-end latency. One promising solution is to divide the inference requests into multiple parts (layers or segments), with each part being executed on a different UAV based on the available resources. Furthermore, some applications require the UAVs to traverse certain areas and capture incidents; thus, planning their paths becomes critical, particularly to reduce the latency of the collaborative inference process. Specifically, planning the UAVs' trajectories can reduce the data transmission latency by communicating with devices in the same proximity while mitigating transmission interference. This work aims to design a model for distributed collaborative inference requests and path planning in a UAV swarm while respecting the resource constraints due to the computational load and memory usage of the inference requests. The model is formulated as an optimization problem that aims to minimize latency. The formulated problem is NP-hard, so finding the optimal solution is quite complex; thus, this paper introduces a real-time and dynamic solution for online applications using deep reinforcement learning. We conduct extensive simulations and compare our results to state-of-the-art studies, demonstrating that our model outperforms the competing models.
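The latency objective of such a layer-partitioned inference has two additive components: compute time on each UAV and transmission time whenever consecutive layers are placed on different UAVs. A minimal sketch of that cost model (the function name, units, and single shared-link assumption are illustrative, not the paper's formulation):

```python
def split_latency(layer_flops, layer_out_bytes, assignment, flops_per_s, link_bps):
    """Total latency of a layer-partitioned inference: compute time on each
    assigned UAV, plus transmission time for a layer's output whenever the
    next layer runs on a different UAV."""
    t = 0.0
    for i, uav in enumerate(assignment):
        t += layer_flops[i] / flops_per_s[uav]                 # compute term
        if i + 1 < len(assignment) and assignment[i + 1] != uav:
            t += layer_out_bytes[i] * 8 / link_bps             # transfer term
    return t
```

For example, two 1 GFLOP layers split across a 1 GFLOP/s and a 2 GFLOP/s UAV over an 8 Mbit/s link, with a 1 MB intermediate activation, cost 1.0 + 1.0 + 0.5 = 2.5 s; an RL agent would search over `assignment` (and the flight paths that determine `link_bps`) to minimize this.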
Recent advances in Neural Radiance Fields (NeRFs) treat the problem of novel view synthesis as Sparse Radiance Field (SRF) optimization using sparse voxels for efficient and fast rendering (Plenoxels, InstantNGP). In order to leverage machine learning and adoption of SRFs as a 3D representation, we present SPARF, a large-scale ShapeNet-based synthetic dataset for novel view synthesis consisting of $\sim$ 17 million images rendered from nearly 40,000 shapes at high resolution (400 X 400 pixels). The dataset is orders of magnitude larger than existing synthetic datasets for novel view synthesis and includes more than one million 3D-optimized radiance fields with multiple voxel resolutions. Furthermore, we propose a novel pipeline (SuRFNet) that learns to generate sparse voxel radiance fields from only a few views. This is done by using the densely collected SPARF dataset and 3D sparse convolutions. SuRFNet employs partial SRFs from one or a few images and a specialized SRF loss to learn to generate high-quality sparse voxel radiance fields that can be rendered from novel views. Our approach achieves state-of-the-art results in the task of unconstrained novel view synthesis from a few views on ShapeNet, as compared to recent baselines. The SPARF dataset will be made public with the code and models on the project website https://abdullahamdi.com/sparf/ .
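The efficiency of sparse voxel radiance fields comes from storing only occupied voxels and alpha-compositing along rays through them. A minimal emission-absorption rendering sketch under simplifying assumptions (a dict-backed grid, per-voxel constant density and color, uniform ray marching; not SPARF's or SuRFNet's actual data structures):

```python
import numpy as np

def render_ray(voxels, origin, direction, t_max=10.0, dt=0.1):
    """Emission-absorption rendering through a sparse voxel grid stored as a
    dict {(i,j,k): (sigma, rgb)}; empty voxels contribute nothing and cost
    only a hash lookup."""
    direction = direction / np.linalg.norm(direction)
    color = np.zeros(3)
    transmittance = 1.0
    t = 0.0
    while t < t_max and transmittance > 1e-4:
        p = origin + t * direction
        key = tuple(np.floor(p).astype(int))
        if key in voxels:
            sigma, rgb = voxels[key]
            alpha = 1.0 - np.exp(-sigma * dt)          # absorption over the step
            color += transmittance * alpha * np.asarray(rgb)
            transmittance *= 1.0 - alpha
        t += dt
    return color, transmittance
```

Early termination once transmittance is negligible is what makes rendering fast in practice; a learned SRF generator only has to predict `(sigma, rgb)` for the occupied voxels.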
With the recent advances in video and 3D understanding, novel 4D spatio-temporal challenges fusing both concepts have emerged. Towards this direction, the Ego4D Episodic Memory Benchmark proposed a task for Visual Queries with 3D Localization (VQ3D). Given an egocentric video clip and an image crop depicting a query object, the goal is to localize the 3D position of the center of that query object with respect to the camera pose of a query frame. Current methods tackle the problem of VQ3D by lifting the 2D localization results of the sister task Visual Queries with 2D Localization (VQ2D) into a 3D reconstruction. Yet, we point out that the low number of Queries with Poses (QwP) from previous VQ3D methods severely hinders their overall success rate and highlights the need for further effort in 3D modeling to tackle the VQ3D task. In this work, we formalize a pipeline that better entangles 3D multiview geometry with 2D object retrieval from egocentric videos. We estimate more robust camera poses, leading to more successful object queries and substantially improved VQ3D performance. In practice, our method reaches a top-1 overall success rate of 86.36% on the Ego4D Episodic Memory Benchmark VQ3D, a 10x improvement over the previous state-of-the-art. In addition, we provide a complete empirical study highlighting the remaining challenges in VQ3D.
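Lifting a 2D retrieval into 3D hinges on a valid camera pose: given intrinsics, a pixel, and a depth, the object center is unprojected into world coordinates, which is why pose-estimation failures (few Queries with Poses) directly cap the success rate. A minimal sketch of that unprojection step (standard pinhole geometry; the function name and pose convention are illustrative):

```python
import numpy as np

def unproject(uv, depth, K, T_wc):
    """Lift a 2D pixel with known depth into world coordinates using camera
    intrinsics K (3x3) and a camera-to-world pose T_wc (4x4 homogeneous)."""
    u, v = uv
    x = (u - K[0, 2]) / K[0, 0] * depth   # back-project through the pinhole
    y = (v - K[1, 2]) / K[1, 1] * depth
    p_cam = np.array([x, y, depth, 1.0])  # point in camera frame (homogeneous)
    return (T_wc @ p_cam)[:3]             # transform into the world frame
```

With an identity pose, a pixel 100 px right of the principal point at depth 2 m and focal length 100 px lands at (2, 0, 2) in the world; any error in `T_wc` shifts the 3D answer by the full pose error, which is why robust pose estimation dominates VQ3D accuracy.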
Although deep neural networks (DNNs) have become the backbone technology of several ubiquitous applications, their deployment in resource-constrained machines, e.g., Internet of Things (IoT) devices, remains challenging. To satisfy the resource requirements of such a paradigm, collaborative deep inference with the IoT was introduced. However, the distribution of DNN networks suffers from severe data leakage. Various threats have been presented, including black-box attacks, where malicious participants can recover arbitrary inputs fed into their devices. Although many countermeasures have been designed to achieve privacy-preserving DNNs, most of them result in additional computation and lower accuracy. In this paper, we present an approach that targets the security of collaborative deep inference by re-thinking the distribution strategy, without sacrificing the model performance. Particularly, we examine the different DNN partitions that make the model susceptible to black-box threats, and we derive the amount of data that should be allocated per device to hide properties of the original input. We formulate this methodology as an optimization, where we establish a trade-off between the latency of co-inference and the privacy level of the data. Next, to relax the optimal solution, we shape our approach as a reinforcement learning (RL) design that supports heterogeneous devices as well as multiple DNNs/datasets.
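The latency-vs-privacy trade-off can be scalarized to pick a partition: each candidate split point has an estimated co-inference latency and a privacy level (how little of the input a black-box adversary can recover), combined with a weight. A minimal sketch of that selection (the function name, the linear scalarization, and the example numbers are illustrative assumptions, not the paper's exact formulation):

```python
def best_partition(latencies, privacy_levels, lam=1.0):
    """Pick the DNN split point minimizing latency + lam * (1 - privacy),
    a scalarized form of the co-inference latency vs. input-privacy trade-off.
    `lam` weights how much privacy loss is tolerated per unit of latency."""
    scores = [l + lam * (1.0 - p) for l, p in zip(latencies, privacy_levels)]
    return min(range(len(scores)), key=scores.__getitem__)
```

An RL formulation replaces this exhaustive scoring with a learned policy, which is what makes the approach tractable across heterogeneous devices and multiple DNNs/datasets.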
Pure transformer models have achieved impressive success in natural language processing and computer vision. However, one limitation of transformers is their requirement for large training data. In the realm of 3D point clouds, the availability of large datasets is a challenge, which exacerbates the problem of training transformers for 3D tasks. In this work, we empirically study and investigate the effect of utilizing knowledge from abundant images for point cloud understanding. We formulate a pipeline dubbed \textit{Pix4Point} that allows harnessing pretrained transformers in the image domain to improve downstream point cloud tasks. This is achieved by a modality-agnostic pure transformer backbone with the help of a tokenizer and decoder layers specialized in the 3D domain. Using image-pretrained transformers, we observe significant performance gains of Pix4Point on the tasks of 3D point cloud classification, part segmentation, and semantic segmentation on the ScanObjectNN, ShapeNetPart, and S3DIS benchmarks, respectively. Our code and models are available at: \url{https://github.com/guochengqian/pix4point}.
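A 3D-specialized tokenizer is what lets a modality-agnostic transformer consume points: local patches are grouped into tokens, analogous to image patches. A minimal sketch of one common tokenization recipe, farthest-point sampling plus k-nearest-neighbour grouping (the function names and the exact grouping scheme are assumptions, not necessarily Pix4Point's tokenizer):

```python
import numpy as np

def farthest_point_sample(points, n_tokens, seed=0):
    """Greedy farthest-point sampling: pick well-spread group centers
    that the tokenizer turns into transformer tokens."""
    rng = np.random.default_rng(seed)
    idx = [rng.integers(len(points))]
    d = np.full(len(points), np.inf)
    for _ in range(n_tokens - 1):
        d = np.minimum(d, np.linalg.norm(points - points[idx[-1]], axis=1))
        idx.append(int(np.argmax(d)))  # farthest from all chosen so far
    return np.array(idx)

def tokenize(points, n_tokens=4, k=8):
    """Each token = a center plus its k nearest neighbours, expressed
    relative to the center (a translation-invariant local patch)."""
    centers = farthest_point_sample(points, n_tokens)
    tokens = []
    for c in centers:
        nn = np.argsort(np.linalg.norm(points - points[c], axis=1))[:k]
        tokens.append((points[nn] - points[c]).reshape(-1))
    return np.stack(tokens)  # (n_tokens, k * 3)
```

Once points are tokens, the pretrained image transformer's attention layers can be reused unchanged; only the tokenizer and decoder are 3D-specific.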
As machine learning and deep learning models have become highly prevalent in a multitude of domains, the main reservation in the adoption of their decision-making process is their black-box nature. The Explainable Artificial Intelligence (XAI) paradigm has gained a lot of momentum due to its ability to reduce model opacity. XAI methods not only increase stakeholders' trust in the decision process but also help developers ensure its fairness. Recent efforts have been devoted to creating transparent models and post-hoc explanations. However, fewer methods have been developed for time series data, and even fewer for multivariate datasets. In this work, we take advantage of the inherent interpretability of shapelets to develop a model-agnostic multivariate time series (MTS) counterfactual explanation algorithm. Counterfactuals can have a tremendous impact on making black-box models explainable by indicating what changes must be performed on the input to change the final decision. We test our approach on a real-life solar flare prediction dataset and prove that our approach produces high-quality counterfactuals. Moreover, a comparison to the only other MTS counterfactual generation algorithm shows that, in addition to being visually interpretable, our explanations are superior in terms of proximity, sparsity, and plausibility.
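The core move of a shapelet-based counterfactual is local and sparse: splice a target-class shapelet into the best-matching window of one channel, leaving the rest of the multivariate series untouched. A toy sketch of that splice under simplifying assumptions (single-channel edit, Euclidean matching; not the paper's full algorithm):

```python
import numpy as np

def counterfactual(ts, shapelet, channel):
    """Toy shapelet-swap counterfactual for a multivariate series `ts`
    (channels x time): splice the target-class shapelet into the position
    of the best-matching window on one channel, leaving all other channels
    and time steps untouched (hence sparse and proximal)."""
    L = len(shapelet)
    x = ts[channel]
    dists = [np.linalg.norm(x[i:i + L] - shapelet)
             for i in range(len(x) - L + 1)]
    start = int(np.argmin(dists))   # window already closest to the shapelet
    cf = ts.copy()
    cf[channel, start:start + L] = shapelet
    return cf, start
```

Editing the window that is already closest to the shapelet is what keeps the counterfactual proximal (small change) and plausible (the inserted pattern is a real discriminative subsequence from the target class).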
Multi-view projection methods have demonstrated promising performance on 3D understanding tasks like 3D classification and segmentation. However, it remains unclear how to combine such multi-view methods with the widely available 3D point clouds. Previous methods use unlearned heuristics to combine features at the point level. To this end, we introduce the concept of the multi-view point cloud (Voint cloud), representing each 3D point as a set of features extracted from multiple view-points. This novel 3D Voint cloud representation combines the compactness of 3D point cloud representation with the natural view-awareness of multi-view representation. Naturally, we can equip this new representation with convolutional and pooling operations. We deploy a Voint neural network (VointNet) with a theoretically established functional form to learn representations in Voint space. Our novel representation achieves state-of-the-art performance on 3D classification and retrieval on ScanObjectNN, ModelNet40, and ShapeNet Core55. Additionally, we achieve competitive performance for 3D semantic segmentation on ShapeNet Parts. Further analysis shows that VointNet improves robustness to rotation and occlusion compared to other methods.
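A Voint cloud attaches to each 3D point one feature vector per view, so a pooling operation over the view axis reduces it back to an ordinary per-point feature. A minimal sketch of an occlusion-aware (masked) view pooling under assumed shapes (n_points x n_views x feat_dim, with a boolean visibility mask; the function name and masking scheme are illustrative, not VointNet's learned operators):

```python
import numpy as np

def masked_voint_pool(feats, visible):
    """Mean-pool per-point multi-view features over visible views only.
    `feats`:   (n_points, n_views, feat_dim) Voint-cloud features.
    `visible`: (n_points, n_views) boolean mask; a view contributes to a
    point's pooled feature only if the point is visible from that view."""
    w = visible[..., None].astype(float)
    return (feats * w).sum(axis=1) / np.maximum(w.sum(axis=1), 1e-9)
```

Pooling only over views that actually see the point is one simple way such a representation can gain robustness to occlusion: hidden views never dilute a point's feature.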
Artificial intelligence (AI) has witnessed significant breakthroughs in a variety of Internet of Things (IoT) applications and services, spanning from recommendation systems to robotics control and military surveillance. This is driven by easier access to sensory data and the enormous scale of pervasive/ubiquitous devices that generate zettabytes (ZB) of real-time data streams. Designing accurate models using such data streams, to predict future insights and revolutionize the decision-making process, inaugurates pervasive systems as a worthy paradigm for a better quality of life. The confluence of pervasive computing and artificial intelligence, Pervasive AI, expanded the role of ubiquitous IoT systems from mainly data collection to executing distributed computations, with a promising alternative to centralized learning that brings various challenges. In this context, wise cooperation and resource scheduling should be envisaged among IoT devices (e.g., smartphones, smart vehicles) and infrastructure (e.g., edge nodes and base stations) to avoid communication and computation overheads and ensure maximum performance. In this paper, we conduct a comprehensive survey of the recent techniques developed to overcome these resource challenges in Pervasive AI systems. Specifically, we first present an overview of pervasive computing, its architecture, and its intersection with artificial intelligence. We then review the background, applications, and performance metrics of AI, particularly deep learning (DL) and online learning, running in a ubiquitous system. Next, we provide a deep literature review, from algorithmic and system perspectives, of distributed inference, training, and online learning tasks across the combination of IoT devices, edge devices, and cloud servers. Finally, we discuss our future vision and research challenges.